On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias

Safran, Itay, Vardi, Gal, Lee, Jason D.

arXiv.org Artificial Intelligence

We study the dynamics and implicit bias of gradient flow (GF) on univariate ReLU neural networks with a single hidden layer in a binary classification setting. We show that when the labels are determined by the sign of a target network with $r$ neurons, with high probability over the initialization of the network and the sampling of the dataset, GF converges in direction (suitably defined) to a network achieving perfect training accuracy and having at most $\mathcal{O}(r)$ linear regions, implying a generalization bound. Unlike many other results in the literature, under an additional assumption on the distribution of the data, our result holds even for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
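For a univariate shallow ReLU network of the form $f(x) = \sum_i v_i\,\mathrm{relu}(w_i x + b_i)$, the linear regions can be counted directly: each neuron with $v_i w_i \neq 0$ bends the function at the breakpoint $-b_i/w_i$, and the region count is one more than the number of distinct active breakpoints. A minimal sketch (the function names and tolerance are illustrative, not from the paper):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def network(x, v, w, b):
    # Shallow univariate ReLU net: f(x) = sum_i v_i * relu(w_i * x + b_i)
    return np.sum(v * relu(w * x + b))

def count_linear_regions(v, w, b, tol=1e-8):
    """Count the linear regions of a shallow univariate ReLU network.

    A neuron contributes a kink at -b_i / w_i only if v_i * w_i != 0;
    the number of linear regions is (distinct active breakpoints) + 1.
    """
    active = np.abs(v * w) > tol          # neurons that actually bend f
    breakpoints = -b[active] / w[active]
    return len(np.unique(np.round(breakpoints, 8))) + 1
```

With $r$ neurons this gives at most $r + 1$ regions; the result above says that gradient flow converges in direction to networks using only $\mathcal{O}(r)$ of them, even when the width is much larger.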


Kinks In the Works

#artificialintelligence

Note that for each neuron in our network, we were able to define another hinge point, or "kink," in the line that got added to the function, so that the slope of the prediction line changes based on the sum of all of the neurons. In this instance we had two neurons, and because their activation points were different (x = 5 and x = 14), this resulted in two kinks at those points. While this is a significantly more complicated prediction function than a standard linear classifier could produce, it still might not be complex enough. If we wanted another kink in our predictions, we could get one by simply adding a third neuron. For each new neuron in our network, we can add another kink to our prediction line, with the location of each kink set by that neuron's activation point (determined by the ReLU activation function).
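The two-neuron picture above can be sketched in a few lines. The form f(x) = Σ wᵢ · relu(x − bᵢ) and the weight values are illustrative choices; the activation points b = 5 and b = 14 come from the text:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def predict(x, weights=(0.5, -1.2), kinks=(5.0, 14.0), bias=0.0):
    # Sum of ReLU neurons: each neuron contributes nothing until x passes
    # its activation point, then adds a linear piece with slope w_i.
    return bias + sum(w * relu(x - b) for w, b in zip(weights, kinks))

# The prediction line is flat before x = 5, has slope 0.5 between the two
# kinks, and slope 0.5 - 1.2 = -0.7 after x = 14.
```

Adding a third neuron with a new activation point would add a third kink, exactly as described above.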


Next-Active-Object prediction from Egocentric Videos

Furnari, Antonino, Battiato, Sebastiano, Grauman, Kristen, Farinella, Giovanni Maria

arXiv.org Artificial Intelligence

Although First Person Vision systems can sense the environment from the user's perspective, they are generally unable to predict the user's intentions and goals. Since human activities can be decomposed into atomic actions and interactions with objects, intelligent wearable systems would benefit from the ability to anticipate user-object interactions. Although this task is not trivial, the First Person Vision paradigm can provide important cues to address this challenge. We propose to exploit the dynamics of the scene to recognize next-active-objects before an object interaction begins. We train a classifier to discriminate trajectories leading to an object activation from all others, and forecast next-active-objects by analyzing fixed-length trajectory segments within a temporal sliding window. The proposed method compares favorably with several baselines on the Activities of Daily Living (ADL) egocentric dataset, which comprises 10 hours of video acquired by 20 subjects performing unconstrained interactions with several objects.
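The sliding-window scoring idea can be sketched as follows. This is not the authors' implementation: the segment features and the scorer here are hypothetical stand-ins for a trained classifier, shown only to make the fixed-length-window mechanics concrete:

```python
import numpy as np

def segment_features(segment):
    # Illustrative features for a (W, 2) trajectory segment:
    # mean per-step displacement and mean per-step speed.
    deltas = np.diff(segment, axis=0)
    return np.array([deltas.mean(), np.linalg.norm(deltas, axis=1).mean()])

def sliding_window_scores(trajectory, window, scorer):
    """Score every fixed-length segment of a (T, 2) object trajectory.

    `scorer` stands in for a binary classifier that rates how likely a
    segment is to lead to an object activation.
    """
    T = len(trajectory)
    return [scorer(segment_features(trajectory[t:t + window]))
            for t in range(T - window + 1)]
```

In this scheme, a high score on any window would flag the tracked object as a candidate next-active-object before the interaction begins.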